251 research outputs found

    Design optimization and application of bolt-shotcrete support for East Tianshan tunnel project in China

    Bolt-shotcrete support is a low-cost form of support that is convenient to construct and produces uniform structural stress, and it is widely used in tunnel engineering internationally. In this paper, the 2# inclined shaft of the East Tianshan tunnel in China is taken as the research object. The stress characteristics of composite lining support and bolt-shotcrete support are analyzed and compared with FLAC3D software, and a bolt-shotcrete support scheme suitable for this project is proposed. Based on the principle of orthogonal experiment design, the most reasonable shotcrete mix proportion is selected, and structural stress and displacement are monitored at typical sections during construction. The results show that: (1) in the FLAC3D simulation, interface elements applied between layers can simulate the interaction between the layers of the lining structure and capture the mechanical and displacement characteristics of the interfaces between layers; (2) in terms of mechanical performance, a single-layer lining is preferable to a composite lining, since it meets the tunnel support requirements with a thinner structure and is more economical; (3) field monitoring shows that the deformation of the bolt-shotcrete support structure is small, the structural stress satisfies the material performance requirements, and no structural damage occurred during construction of the test section; (4) with the bolt-shotcrete support in place, the support cost per meter is reduced by 36.78% and the average excavation efficiency is increased by 38.9%, which verifies the applicability and advantages of the optimized scheme. The research results can provide a reference for the subsequent construction of this tunnel and similar projects.
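
    The mix-proportion selection described above follows the standard orthogonal-experiment workflow of running an orthogonal array of trial mixes and ranking factors by range analysis. The Python sketch below illustrates that workflow only; the factors, levels, and strength values are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical L9(3^3) orthogonal array for a shotcrete mix study.
# Factors (illustrative only, not from the paper):
#   A: water-cement ratio level, B: sand ratio level, C: accelerator dosage level.
design = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [0, 2, 2],
    [1, 0, 1],
    [1, 1, 2],
    [1, 2, 0],
    [2, 0, 2],
    [2, 1, 0],
    [2, 2, 1],
])

# Hypothetical 28-day strength results (MPa) for the nine trial mixes.
strength = np.array([31.2, 33.5, 30.8, 34.1, 32.6, 29.9, 33.0, 35.2, 31.7])

# Range analysis: mean response at each level of each factor,
# then the range R = max(level mean) - min(level mean) ranks factor influence.
for f, name in enumerate(["A", "B", "C"]):
    level_means = np.array([strength[design[:, f] == lvl].mean() for lvl in range(3)])
    R = level_means.max() - level_means.min()
    best = int(level_means.argmax())
    print(f"factor {name}: level means {np.round(level_means, 2)}, "
          f"range R={R:.2f}, best level {best}")
```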

    LOT: Layer-wise Orthogonal Training on Improving ℓ2 Certified Robustness

    Recent studies show that training deep neural networks (DNNs) with Lipschitz constraints can enhance adversarial robustness and other model properties such as stability. In this paper, we propose a layer-wise orthogonal training method (LOT) to effectively train 1-Lipschitz convolution layers via parametrizing an orthogonal matrix with an unconstrained matrix. We then efficiently compute the inverse square root of a convolution kernel by transforming the input domain to the Fourier frequency domain. In addition, as existing works show that semi-supervised training helps improve empirical robustness, we aim to bridge the gap and prove that semi-supervised learning also improves the certified robustness of Lipschitz-bounded models. We conduct comprehensive evaluations of LOT under different settings. We show that LOT significantly outperforms baselines in deterministic ℓ2 certified robustness and scales to deeper neural networks. Under the supervised scenario, we improve the state-of-the-art certified robustness for all architectures (e.g., from 59.04% to 63.50% on CIFAR-10 and from 32.57% to 34.59% on CIFAR-100 at radius ρ = 36/255 for 40-layer networks). With semi-supervised learning over unlabelled data, we improve the state-of-the-art certified robustness on CIFAR-10 at ρ = 108/255 from 36.04% to 42.39%. In addition, LOT consistently outperforms baselines on different model architectures with only 1/3 of the evaluation time. Comment: NeurIPS 2022
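
    The core operation described above, orthogonalizing a convolution kernel per spatial frequency via an inverse matrix square root, can be sketched as follows. This is a simplified NumPy reconstruction based only on the abstract (circular convolution, eigendecomposition-based inverse square root); it is not the authors' implementation, and the function names and the unitarity check are illustrative.

```python
import numpy as np

def inv_sqrt_hermitian(M, eps=1e-8):
    """Inverse matrix square root of a Hermitian PSD matrix via eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return (U * (1.0 / np.sqrt(np.maximum(w, eps)))) @ U.conj().T

def orthogonalize_conv_kernel(V, n):
    """Map an unconstrained kernel V of shape (c, c, k, k) to an orthogonal circular
    convolution on n x n inputs by orthogonalizing each spatial-frequency matrix:
        W_f = V_f (V_f^H V_f)^(-1/2).
    Returns the kernel in the Fourier domain, shape (c, c, n, n)."""
    Vf = np.fft.fft2(V, s=(n, n), axes=(2, 3))   # per-frequency c x c matrices
    Wf = np.empty_like(Vf)
    for i in range(n):
        for j in range(n):
            A = Vf[:, :, i, j]
            Wf[:, :, i, j] = A @ inv_sqrt_hermitian(A.conj().T @ A)
    return Wf

# Quick check: each per-frequency matrix should be (numerically) unitary,
# which is what makes the induced circular convolution norm-preserving (1-Lipschitz).
rng = np.random.default_rng(0)
Wf = orthogonalize_conv_kernel(rng.standard_normal((4, 4, 3, 3)), n=8)
err = max(np.abs(Wf[:, :, i, j].conj().T @ Wf[:, :, i, j] - np.eye(4)).max()
          for i in range(8) for j in range(8))
print("max deviation from unitarity:", err)
```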

    SoK: Certified Robustness for Deep Neural Networks

    Great advances in deep neural networks (DNNs) have led to state-of-the-art performance on a wide range of tasks. However, recent studies have shown that DNNs are vulnerable to adversarial attacks, which raises serious concerns when deploying these models in safety-critical applications such as autonomous driving. Different defense approaches have been proposed against adversarial attacks, including: a) empirical defenses, which can usually be adaptively attacked again without providing robustness certification; and b) certifiably robust approaches, which consist of robustness verification, providing a lower bound on robust accuracy against any attack under certain conditions, and corresponding robust training approaches. In this paper, we systematize certifiably robust approaches and related practical and theoretical implications and findings. We also provide the first comprehensive benchmark of existing robustness verification and training approaches on different datasets. In particular, we 1) provide a taxonomy for the robustness verification and training approaches and summarize the methodologies of representative algorithms, 2) reveal the characteristics, strengths, limitations, and fundamental connections among these approaches, 3) discuss current research progress, theoretical barriers, main challenges, and future directions for certifiably robust approaches for DNNs, and 4) provide an open-sourced unified platform to evaluate 20+ representative certifiably robust approaches. Comment: To appear at the 2023 IEEE Symposium on Security and Privacy (SP); 14 pages for the main text; benchmark & tool website: http://sokcertifiedrobustness.github.io
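
    As a minimal illustration of what a robustness verification approach computes, the sketch below runs interval bound propagation (one of the simplest verification methods covered by such taxonomies) on a toy two-layer ReLU network and checks whether the prediction is certified under an l-infinity perturbation budget. The network weights and budget are made up; this is not the benchmark platform's code.

```python
import numpy as np

def interval_bound_linear(W, b, lo, hi):
    """Propagate an elementwise input interval [lo, hi] through y = W x + b.
    Standard interval arithmetic: split W into positive and negative parts."""
    Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

# Toy 2-layer ReLU network; weights are arbitrary illustrative values.
rng = np.random.default_rng(1)
W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)
W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)

x = rng.standard_normal(4)
eps = 0.05                                           # l_inf perturbation budget
lo1, hi1 = interval_bound_linear(W1, b1, x - eps, x + eps)
lo1, hi1 = np.maximum(lo1, 0), np.maximum(hi1, 0)    # ReLU is monotone
lo2, hi2 = interval_bound_linear(W2, b2, lo1, hi1)

pred = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0) + b2))
# Certified if the lower bound of the predicted logit beats the upper bound
# of every other logit for all perturbations within the budget.
certified = all(lo2[pred] > hi2[k] for k in range(3) if k != pred)
print("prediction:", pred, "certified at eps =", eps, ":", certified)
```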

    Fairness in Federated Learning via Core-Stability

    Federated learning provides an effective paradigm to jointly optimize a model that benefits from rich distributed data while protecting data privacy. Nonetheless, the heterogeneous nature of distributed data makes it challenging to define and ensure fairness among local agents. For instance, it is intuitively "unfair" for agents with high-quality data to sacrifice their performance due to other agents with low-quality data. The currently popular egalitarian and weighted equity-based fairness measures suffer from this pitfall. In this work, we aim to formally represent this problem and address these fairness issues using concepts from cooperative game theory and social choice theory. We model the task of learning a shared predictor in the federated setting as a fair public decision-making problem, and then define the notion of core-stable fairness: given N agents, there is no subset of agents S that can benefit significantly by forming a coalition among themselves, based on their utilities U_N and U_S (i.e., (|S|/N) U_S ≥ U_N). Core-stable predictors are robust to low-quality local data from some agents, and additionally they satisfy Proportionality and Pareto-optimality, two well sought-after fairness and efficiency notions within social choice. We then propose an efficient federated learning protocol, CoreFed, to optimize a core-stable predictor. CoreFed determines a core-stable predictor when the loss functions of the agents are convex. CoreFed also determines approximate core-stable predictors when the loss functions are not convex, as with smooth neural networks. We further show the existence of core-stable predictors in more general settings using Kakutani's fixed point theorem. Finally, we empirically validate our analysis on two real-world datasets and show that CoreFed achieves higher core-stability fairness than FedAvg while maintaining similar accuracy. Comment: NeurIPS 2022; code: https://openreview.net/attachment?id=lKULHf7oFDo&name=supplementary_materia
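
    To make the core-stability condition above concrete, the sketch below brute-forces over coalitions and checks the scaled-domination condition stated in the abstract. It only illustrates the definition, not the CoreFed protocol; the utility values and the coalition-utility oracle are hypothetical.

```python
import numpy as np
from itertools import combinations

def has_blocking_coalition(u_global, best_coalition_utility, n_agents):
    """Brute-force check of the core-stability condition sketched in the abstract:
    a predictor is blocked if some coalition S has an alternative whose scaled
    utilities dominate what its members get under the shared predictor, i.e.
    (|S|/N) * u_i(theta_S) >= u_i(theta) for all i in S, strictly for at least one.

    u_global: length-N array, agent i's utility under the shared predictor.
    best_coalition_utility(S): hypothetical oracle returning, for coalition S,
        each member's utility under the best predictor S could train alone."""
    for size in range(1, n_agents + 1):
        for S in combinations(range(n_agents), size):
            scaled = (len(S) / n_agents) * best_coalition_utility(S)
            members = np.array(S)
            if np.all(scaled >= u_global[members]) and np.any(scaled > u_global[members]):
                return True, S
    return False, None

# Toy example with 3 agents and made-up utilities.
u_global = np.array([0.80, 0.75, 0.70])
def best_coalition_utility(S):
    # Hypothetical: any coalition alone reaches utility 0.9 for each member.
    return np.full(len(S), 0.9)

blocked, S = has_blocking_coalition(u_global, best_coalition_utility, 3)
print("blocking coalition found:", blocked, S)
```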

    Certifying Out-of-Domain Generalization for Blackbox Functions

    Certifying the robustness of model performance under bounded data distribution drifts has recently attracted intensive interest under the umbrella of distributional robustness. However, existing techniques either make strong assumptions on the model class and loss functions that can be certified, such as smoothness expressed via Lipschitz continuity of gradients, or require solving complex optimization problems. As a result, the wider application of these techniques is currently limited by their scalability and flexibility: they often do not scale to large-scale datasets with modern deep neural networks, or cannot handle loss functions that may be non-smooth, such as the 0-1 loss. In this paper, we focus on the problem of certifying distributional robustness for blackbox models and bounded loss functions, and propose a novel certification framework based on the Hellinger distance. Our certification technique scales to ImageNet-scale datasets, complex models, and a diverse set of loss functions. We then focus on one specific application enabled by such scalability and flexibility: certifying out-of-domain generalization for large neural networks and loss functions such as accuracy and AUC. We experimentally validate our certification method on a number of datasets, ranging from ImageNet, where we provide the first non-vacuous certified out-of-domain generalization, to smaller classification tasks, where we are able to compare with the state of the art and show that our method performs considerably better. Comment: 39th International Conference on Machine Learning (ICML) 2022
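
    The certificate described above is parameterized by the Hellinger distance between the source distribution and the shifted distribution. The snippet below only illustrates that distance measure on toy discrete distributions; it does not reproduce the paper's certification bound, and the example distributions are made up.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions p and q:
    H(p, q) = (1 / sqrt(2)) * || sqrt(p) - sqrt(q) ||_2, which lies in [0, 1]."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2.0)

# Toy source and shifted target distributions over 4 bins (values made up).
p = np.array([0.40, 0.30, 0.20, 0.10])
q = np.array([0.25, 0.25, 0.25, 0.25])
print("Hellinger distance:", round(hellinger(p, q), 4))
```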

    Reliability Assurance for Deep Neural Network Architectures Against Numerical Defects

    With the widespread deployment of deep neural networks (DNNs), ensuring the reliability of DNN-based systems is of great importance. Serious reliability issues such as system failures can be caused by numerical defects, one of the most frequent types of defects in DNNs. To assure high reliability against numerical defects, in this paper we propose the RANUM approach, which includes novel techniques for three reliability assurance tasks: detection of potential numerical defects, confirmation of potential-defect feasibility, and suggestion of defect fixes. To the best of our knowledge, RANUM is the first approach that confirms potential-defect feasibility with failure-exhibiting tests and suggests fixes automatically. Extensive experiments on benchmarks of 63 real-world DNN architectures show that RANUM outperforms state-of-the-art approaches across the three reliability assurance tasks. In addition, when the RANUM-generated fixes are compared with developers' fixes on open-source projects, in 37 out of 40 cases the RANUM-generated fixes are equivalent to or even better than the human fixes. Comment: To appear at the 45th International Conference on Software Engineering (ICSE 2023), camera-ready version
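
    The sketch below shows the kind of numerical defect such tools target and the kind of fix they can suggest: computing log(softmax(x)) naively under- and overflows, while the log-sum-exp formulation is stable. The example is illustrative only and does not use RANUM itself.

```python
import numpy as np

# A classic numerical defect of the kind such tools flag: computing
# log(softmax(x)) directly can overflow exp() and underflow to log(0) = -inf.
x = np.array([1000.0, 0.0, -1000.0])

naive = np.log(np.exp(x) / np.exp(x).sum())   # produces nan / -inf

# A standard fix: the log-sum-exp trick, shifting by the max logit first.
shifted = x - x.max()
stable = shifted - np.log(np.exp(shifted).sum())

print("naive  :", naive)    # contains nan / -inf
print("stable :", stable)
```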